The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning -- Supplementary Material -- A Tabular Experiments

Neural Information Processing Systems

Here, we discuss some additional settings for the tabular experiments. Sarsa(0.95), in contrast to MB-VI and MB-SU, is a multi-step method; therefore, there is stochasticity in the update target, even in deterministic environments, due to the exploration of the behavior policy. All methods used optimistic initialization. The pseudocode of the tabular, on-policy method used in Section 5.1 is shown in Algorithm 1. Its estimates are updated at the end of each episode, using the data gathered during that episode.
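To make the point about stochastic update targets concrete, here is a minimal sketch of a tabular Sarsa(λ) agent with accumulating eligibility traces and optimistic initialization. This is an illustration, not the paper's Algorithm 1: the environment interface (`reset`/`step`), the `epsilon_greedy` helper, and all hyperparameters are assumptions, and the sketch updates online rather than deferring updates to the end of the episode as the paper's method does.

```python
import numpy as np

def epsilon_greedy(Q, s, epsilon, rng):
    """Behavior policy: explore with probability epsilon, else act greedily."""
    if rng.random() < epsilon:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[s]))

def sarsa_lambda_episode(Q, env, alpha=0.1, gamma=0.97, lam=0.95,
                         epsilon=0.1, rng=None):
    """Run one episode of tabular Sarsa(lambda) with accumulating traces.

    Q is an (n_states, n_actions) array, initialized optimistically
    (above the true maximum return) to drive exploration. `env` is assumed
    to expose reset() -> state and step(a) -> (state, reward, done).
    Because the bootstrap target uses the action actually selected by the
    exploring behavior policy, the target is stochastic even when the
    environment itself is deterministic.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    E = np.zeros_like(Q)  # eligibility traces
    s = env.reset()
    a = epsilon_greedy(Q, s, epsilon, rng)
    done = False
    while not done:
        s2, r, done = env.step(a)
        a2 = epsilon_greedy(Q, s2, epsilon, rng)
        delta = r + (0.0 if done else gamma * Q[s2, a2]) - Q[s, a]
        E[s, a] += 1.0          # accumulate trace for the visited pair
        Q += alpha * delta * E  # multi-step credit via the traces
        E *= gamma * lam        # decay all traces
        s, a = s2, a2
    return Q
```

Because λ = 0.95 propagates each TD error back along the trace, a single noisy action choice at one step perturbs the values of many earlier state-action pairs, which is the multi-step stochasticity contrasted with the one-step MB-VI and MB-SU updates.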


Review for NeurIPS paper: The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning

Neural Information Processing Systems

Weaknesses: – Sec 3: Your method strongly depends on the 'top-terminal fraction'. I see multiple potential problems: 1) What worries me most is that it only measures optimality. What if my model-based agent adapts very fast to the new domain but reaches performance just below optimal? Then my MBRL method can be very effective, but the LoCA regret will still be very large. (Note that the regret at the bottom of p. 4 cannot correct for this, as it sums over all timesteps and multiplies by the success fraction.) 3) In more complicated tasks, it can be hard to determine the optimal behaviour, i.e., to even define the 'top-terminal fraction'.


Review for NeurIPS paper: The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning

Neural Information Processing Systems

This paper proposes a method for identifying model-based behavior in RL agents (the "LoCA regret"), which can be used without knowing anything about the internal structure of the agent itself. This method is demonstrated to correctly distinguish between classical model-free and model-based agents. It is also used to analyze MuZero, revealing that although MuZero is in principle a model-based algorithm, it does not make optimal use of its model. The reviewers agreed that the LoCA regret is a useful metric, and felt that careful evaluation of agents through metrics like this is an important area of research in RL. I agree, and found very interesting the demonstration that just because a particular algorithm makes use of a model does not necessarily mean that the algorithm will have the properties we think of as being associated with model-based algorithms. While there was some debate during the discussion period about some of the choices regarding the calculation of the LoCA regret (e.g.


The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning

Neural Information Processing Systems

Deep model-based Reinforcement Learning (RL) has the potential to substantially improve the sample-efficiency of deep RL. While various challenges have long held it back, a number of papers have recently come out reporting success with deep model-based methods. This is a great development, but the lack of a consistent metric to evaluate such methods makes it difficult to compare various approaches. For example, the common single-task sample-efficiency metric conflates improvements due to model-based learning with various other aspects, such as representation learning, making it difficult to assess true progress on model-based RL. To address this, we introduce an experimental setup to evaluate model-based behavior of RL methods, inspired by work from neuroscience on detecting model-based behavior in humans and animals.